Deep Learning for Temporal Logics
Temporal logics are a well-established formal specification paradigm for specifying the behavior of systems, and they serve as inputs to industrial-strength verification tools. We report on current advances in applying deep learning to temporal logical reasoning tasks, showing that models can even solve instances on which competitive classical algorithms timed out.
Teaching Temporal Logics to Neural Networks
We study two fundamental questions in neuro-symbolic computing: can deep learning tackle challenging problems in logics end-to-end, and can neural networks learn the semantics of logics? In this work we focus on linear-time temporal logic (LTL), as it is widely used in verification. We train a Transformer to directly predict a solution, i.e. a trace, to a given LTL formula. The training data is generated with classical solvers, which, however, provide only one of many possible solutions to each formula. We demonstrate that it is sufficient to train on those particular solutions, and that Transformers can predict solutions even to formulas from benchmarks in the literature on which the classical solver timed out. Transformers also generalize to the semantics of the logic: while they often deviate from the solutions found by the classical solvers, they still predict correct solutions to most formulas.
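To make the prediction task concrete, a predicted trace can be checked against the formula it is supposed to satisfy. The following is a minimal illustrative sketch, not the authors' code: an evaluator for a small LTL fragment under bounded, finite-trace semantics, where formulas are nested tuples and a trace is a list of sets of atomic propositions.

```python
# Hypothetical illustration: checking a predicted trace against an LTL
# formula, for the fragment {atomic, not, and, next, until}, interpreted
# over a finite trace (bounded semantics).

def holds(f, trace, i=0):
    """Return True iff formula f holds at position i of the finite trace."""
    op = f[0]
    if op == "ap":                      # atomic proposition
        return i < len(trace) and f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "next":
        return holds(f[1], trace, i + 1)
    if op == "until":                   # f[2] eventually holds; f[1] holds until then
        return any(
            holds(f[2], trace, k) and all(holds(f[1], trace, j) for j in range(i, k))
            for k in range(i, len(trace))
        )
    raise ValueError(f"unknown operator: {op}")

# "a until b" on the trace {a}, {a}, {b}
formula = ("until", ("ap", "a"), ("ap", "b"))
trace = [{"a"}, {"a"}, {"b"}]
print(holds(formula, trace))  # True
```

A checker like this is what makes the paper's evaluation possible at all: since a formula has many correct traces, a model's output is judged by semantic satisfaction rather than by string equality with the solver's solution.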
Generating Symbolic Reasoning Problems with Transformer GANs
We study the capabilities of GANs and Wasserstein GANs equipped with Transformer encoders to generate sensible and challenging training data for symbolic reasoning domains. We conduct experiments on two problem domains where Transformers have recently been applied successfully: symbolic mathematics and temporal specifications in verification. Even without autoregression, our GAN models produce syntactically correct instances. We show that the generated data can be used as a substitute for real training data when training a classifier, and, in particular, that training data can be generated from a dataset that is too small to be trained on directly. Using a GAN setting also allows us to alter the target distribution: we show that by adding a classifier-uncertainty term to the generator objective, we obtain a dataset that is even harder for a temporal logic classifier to solve than our original dataset.
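The shape of such an augmented objective can be sketched as follows. This is an illustrative sketch under assumptions, not the paper's exact loss: the extra term is taken here to be the prediction entropy of the downstream classifier on generated samples, so the generator is rewarded for producing instances the classifier is unsure about.

```python
# Hypothetical sketch of a WGAN generator objective augmented with a
# classifier-uncertainty bonus (the weight and entropy formulation are
# assumptions for illustration).
import numpy as np

def classifier_entropy(probs):
    """Mean prediction entropy of the classifier over a batch of generated
    samples; higher entropy means the classifier is less certain."""
    eps = 1e-12  # avoid log(0)
    return -np.mean(np.sum(probs * np.log(probs + eps), axis=1))

def generator_loss(critic_scores, classifier_probs, uncertainty_weight=0.1):
    """Standard WGAN generator term (drive critic scores on fakes up)
    minus a weighted bonus for classifier uncertainty, so minimizing the
    loss steers generation toward harder instances."""
    adversarial = -np.mean(critic_scores)
    uncertainty_bonus = classifier_entropy(classifier_probs)
    return adversarial - uncertainty_weight * uncertainty_bonus

critic_scores = np.array([0.3, -0.1, 0.4])
classifier_probs = np.array([[0.5, 0.5], [0.9, 0.1], [0.6, 0.4]])
loss = generator_loss(critic_scores, classifier_probs)
```

Under this formulation, a batch on which the classifier outputs near-uniform probabilities yields a strictly lower generator loss than a batch it classifies confidently, which is the mechanism behind the "harder dataset" result described above.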